Neural Computing with Small Weights

Authors

  • Kai-Yeung Siu
  • Jehoshua Bruck (IBM Research Division, Almaden Research Center, San Jose, CA 95120-6099)
Abstract

An important issue in neural computation is the dynamic range of weights in neural networks. Many experimental results on learning indicate that the weights in the networks can grow prohibitively large with the size of the inputs. Here we address this issue by studying the tradeoffs between the depth and the size of weights in polynomial-size networks of linear threshold elements (LTEs). We show that there is an efficient way of simulating a network of LTEs with large weights by a network of LTEs with small weights. In particular, we prove that every depth-d, polynomial-size network of LTEs with exponentially large integer weights can be simulated by a depth-(2d + 1), polynomial-size network of LTEs with polynomially bounded integer weights. To prove these results, we use tools from harmonic analysis of Boolean functions. Our technique is quite general; it also provides insights into other problems. For example, we are able to improve the best known results on the depth of a network of linear threshold elements that computes the COMPARISON, SUM and PRODUCT of two n-bit numbers, and the MAXIMUM and the SORTING of n n-bit numbers.
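
To make the role of large weights concrete, consider the COMPARISON function mentioned above. A single LTE computes COMPARISON of two n-bit numbers, but its natural integer weights are the powers of two ±2^i and hence exponentially large in n; by the theorem quoted above (the case d = 1), the same function can instead be computed in depth 3 with polynomially bounded weights. The sketch below is a minimal illustration in Python, not taken from the paper; the names lte and comparison and the most-significant-bit-first encoding are our own assumptions.

# Minimal illustrative sketch (not from the paper): a linear threshold
# element (LTE) and COMPARISON of two n-bit numbers computed by a single
# LTE whose integer weights are +/- powers of two, i.e. exponentially
# large in n. All names and conventions here are our own.

def lte(weights, threshold, bits):
    # Output 1 iff the weighted sum of the 0/1 inputs reaches the threshold.
    return 1 if sum(w * x for w, x in zip(weights, bits)) >= threshold else 0

def comparison(x_bits, y_bits):
    # COMPARISON(x, y) = 1 iff x >= y, bits given most significant first.
    n = len(x_bits)
    # Weight +2^(n-1-i) on x_i and -2^(n-1-i) on y_i: the weighted sum is
    # exactly x - y, so one LTE suffices, but the weights grow like 2^n.
    weights = [2 ** (n - 1 - i) for i in range(n)]
    weights += [-(2 ** (n - 1 - i)) for i in range(n)]
    return lte(weights, 0, list(x_bits) + list(y_bits))

print(comparison([1, 1, 0, 1], [1, 0, 1, 1]))  # x = 13 >= y = 11 -> 1
print(comparison([0, 1, 0, 1], [1, 0, 0, 1]))  # x = 5  <  y = 9  -> 0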

Similar articles

Numerical solution of fuzzy linear Fredholm integro-differential equation by fuzzy neural network

In this paper, a novel hybrid method based on a learning algorithm of fuzzy neural network and Newton-Cotes methods with positive coefficient for the solution of linear Fredholm integro-differential equation of the second kind with fuzzy initial value is presented. Here the neural network is considered as a part of a large field called neural computing or soft computing. We propose a learning algorithm from ...

Full text

Numerical solution of fuzzy differential equations under generalized differentiability by fuzzy neural network

In this paper, we interpret a fuzzy differential equation by using the strongly generalized differentiability concept, utilizing the Generalized Characterization Theorem. Then a novel hybrid method based on a learning algorithm of fuzzy neural network for the solution of a differential equation with fuzzy initial value is presented. Here the neural network is considered as a part of a large field called ne...

Full text

Learning Document Image Features With SqueezeNet Convolutional Neural Network

The classification of various document images is considered an important step towards building a modern digital library or office automation system. Convolutional Neural Network (CNN) classifiers trained with backpropagation are considered to be the current state of the art model for this task. However, there are two major drawbacks for these classifiers: the huge computational power demand for...

Full text

Adaptive Quaternion Attitude Control of Aerodynamic Flight Control Vehicles

Conventional quaternion based methods have been extensively employed for spacecraft attitude control where the aerodynamic forces can be neglected. In the presence of aerodynamic forces, the flight attitude control is more complicated due to aerodynamic moments and inertia uncertainties. In this paper, a robust neuro-adaptive quat...

Full text

Restricted Recurrent Neural Tensor Networks

Increasing the capacity of recurrent neural networks (RNN) usually involves augmenting the size of the hidden layer, resulting in a significant increase of computational cost. An alternative is the recurrent neural tensor network (RNTN), which increases capacity by employing distinct hidden layer weights for each vocabulary word. However, memory usage scales linearly with vocabulary size, which...

Full text

Journal:

Volume   Issue

Pages  -

Publication year: 1991